
    How many interchanges does the selection sort make for iid geometric(p) input?

    The note derives an expression for the number of interchanges made by selection sort when the elements to be sorted are iid variates from a geometric distribution. Empirical results reveal that we can work with a simpler model than the one suggested by theory. The moral is that statistical analysis of an algorithm's complexity has something to offer in its own right and should therefore be undertaken without a predetermined mindset to merely verify what we already know in theory. Herein also lies the concept of an empirical O, a novel, although subjective, bound estimate over a finite input range obtained by running computer experiments. For an arbitrary algorithm, for which theoretical results could be tedious to obtain, this could be of even greater use. Comment: 10 pages.
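    The kind of computer experiment the note relies on can be sketched as follows: count the interchanges (actual swaps) selection sort makes on samples of iid geometric variates and average over replications. This is only a minimal illustration of the setup; the parameter values, sample sizes and replication count below are arbitrary choices, not those of the note.

```python
import numpy as np

rng = np.random.default_rng(1)

def selection_sort_interchanges(a):
    """Count the interchanges (actual swaps) selection sort makes while sorting a copy of a."""
    a = list(a)
    swaps = 0
    n = len(a)
    for i in range(n - 1):
        m = i + int(np.argmin(a[i:]))  # index of the minimum of the unsorted tail
        if m != i:                     # an interchange happens only if the minimum is out of place
            a[i], a[m] = a[m], a[i]
            swaps += 1
    return swaps

# Mean interchange count over a few replications, for illustrative parameter values.
for p in (0.2, 0.5, 0.8):
    for n in (100, 500, 1000):
        mean = np.mean([selection_sort_interchanges(rng.geometric(p, n)) for _ in range(30)])
        print(f"p={p:.1f}  n={n:4d}  mean interchanges = {mean:.1f}")
```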

    How robust is quicksort average complexity?

    The paper questions the robustness of the average case time complexity of the fast and popular quicksort algorithm. Among the six standard probability distributions examined in the paper, only the continuous uniform, exponential and standard normal support it, whereas the others support the worst case complexity measure. To the question "Why do we get the worst case complexity measure each time the average case measure is discredited?", one logical answer is that average case complexity under the universal distribution equals worst case complexity. This answer, though hard to challenge, gives no idea as to which of the standard probability distributions come under the umbrella of universality. The moral is that average case complexity measures, in cases where they differ from the worst case, should be deemed robust only if they are supported by at least the standard probability distributions, both discrete and continuous. Regrettably, this is not the case with quicksort. Comment: 15 pages; 12 figures; 2 tables.
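    A distribution-wise experiment of this flavour can be sketched by counting key comparisons of a plain quicksort on samples drawn from several distributions; with discrete inputs the many tied keys push a naive partition towards its worst case. The implementation below (Lomuto partition, last element as pivot) and the particular distributions and sample size are illustrative assumptions, not the paper's exact setup.

```python
import sys
import numpy as np

sys.setrecursionlimit(10_000)  # plain quicksort can recurse deeply on heavily tied inputs
rng = np.random.default_rng(7)

def quicksort_comparisons(a):
    """Count key comparisons of a plain quicksort (Lomuto partition, last element as pivot)."""
    a = list(a)
    count = 0

    def qsort(lo, hi):
        nonlocal count
        if lo >= hi:
            return
        pivot = a[hi]
        i = lo
        for j in range(lo, hi):
            count += 1
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        qsort(lo, i - 1)
        qsort(i + 1, hi)

    qsort(0, len(a) - 1)
    return count

n = 2000
samples = {
    "continuous uniform": rng.uniform(size=n),
    "standard normal":    rng.standard_normal(n),
    "exponential":        rng.exponential(size=n),
    "binomial(10, 0.5)":  rng.binomial(10, 0.5, n),  # few distinct keys, many ties
    "poisson(3)":         rng.poisson(3, n),
    "geometric(0.5)":     rng.geometric(0.5, n),
}
for name, x in samples.items():
    print(f"{name:20s} comparisons = {quicksort_comparisons(x)}")
```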

    On an algorithm that generates an interesting maximal set P(n) of the naturals for any n greater than or equal to 2

    The paper considers the problem of finding the largest possible set P(n), a subset of the set N of natural numbers, with the property that a number is in P(n) if and only if it is a sum of n distinct naturals that are either all in P(n) or none in P(n). Here "largest" is in the set-theoretic sense and n is greater than or equal to 2. We call P(n) a maximal set obeying this property. For small n, say 2 or 3, it is possible to develop P(n) intuitively, but we strongly felt the necessity of an algorithm for any n greater than or equal to 2. Since P(n) is invariably an infinite set, we define another set Q(n) = N - P(n), prove that Q(n) is finite and, since P(n) is automatically known once Q(n) is known, design an algorithm of worst case O(1) complexity which generates Q(n). Comment: There are some problems with the page numbering; I could not remove the unwanted page numbers. Please read the pages one after another as they come. There are 11 pages in all.

    On Why and What of Randomness

    This paper has several objectives. First, it separates randomness from lawlessness and shows why even genuine randomness does not imply lawlessness. Second, it separates the question "Why should I call a phenomenon random?" (answered in part one) from the patent question "What is a random sequence?", for which the answer lies in Kolmogorov complexity (explained in part two). While answering the first question, the note argues that there are four motivating factors for calling a phenomenon random: ontic, epistemic, pseudo and telescopic, the first two depicting genuine randomness and the last two false randomness. Third, ontic and epistemic randomness are distinguished from ontic and epistemic probability. Fourth, it encourages students to be applied statisticians and advises against becoming armchair theorists, and this is interestingly achieved by a straight application of telescopic randomness. Overall, it tells the teacher not to jump to probability without first explaining randomness properly, and similarly advises students to read (and understand) randomness minutely before taking on probability. Comment: This article will definitely assist both teaching and research (especially on the interface of statistics and computing).

    The Parameterized Complexity Analysis of Partition Sort for Negative Binomial Distribution Inputs

    The present paper studies the Partition Sort algorithm for negative binomial inputs. Comparing the results with those for binomial inputs in our previous work, we find that this algorithm is sensitive to the parameters of both distributions, but the main effects as well as the interaction effects involving these parameters and the input size are more significant in the negative binomial case.
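    A parameterized study of this kind is typically laid out as a factorial experiment: sorting times are recorded over a grid of input sizes and distribution parameters, with replications, and the main and interaction effects are then assessed (e.g. by ANOVA). The sketch below only illustrates that layout with a stand-in library sort; Partition Sort itself is the authors' algorithm and is not reproduced here, and the grid values are arbitrary.

```python
import time
import numpy as np

rng = np.random.default_rng(42)

def timed_sort(a):
    """Wall-clock time of a stand-in comparison sort on the given sample."""
    t0 = time.perf_counter()
    sorted(a.tolist())
    return time.perf_counter() - t0

# Full factorial layout over input size n and the negative binomial parameters (r, p),
# with replications; the resulting table is ready for a main-effect / interaction analysis.
rows = []
for n in (10_000, 20_000, 40_000):
    for r in (5, 20):
        for p in (0.2, 0.5, 0.8):
            for rep in range(5):
                x = rng.negative_binomial(r, p, n)
                rows.append((n, r, p, rep, timed_sort(x)))

for row in rows[:5]:
    print(row)
```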

    A Statistical Peek into Average Case Complexity

    The present paper is a statistical adventure in exploring the average case complexity behavior of computer algorithms. Rather than following the traditional count-based analytical (pen and paper) approach, we instead talk in terms of a weight-based analysis that permits mixing of distinct operations into a conceptual bound called the statistical bound and its empirical estimate, the so-called "empirical O". Based on careful analysis of the results obtained, we have introduced two new conjectures in the domain of algorithmic analysis. The analytical way of average case analysis falls flat when it comes to a data model for which the expectation does not exist (e.g. the Cauchy distribution for continuous input data and certain discrete distribution inputs such as those studied in the paper). The empirical side of our approach, with its thrust on computer experiments and applied statistics, lends a helping hand by complementing and supplementing its theoretical counterpart. Computer science is, or at least has aspects of, an experimental science as well, and hence we hope our statistical findings will be equally recognized among theoretical scientists.
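    The notion of an empirical O over a finite input range can be illustrated with a small computer experiment: observe mean sorting times for Cauchy inputs (whose expectation does not exist) at several sizes and compare least-squares fits of candidate growth models. This is a bare-bones sketch of that idea, not the paper's full weight-based methodology; the sort routine, sizes and candidate models are illustrative choices.

```python
import time
import numpy as np

rng = np.random.default_rng(0)

def observed_time(n):
    """Mean sorting time of n iid standard Cauchy variates (no finite expectation)."""
    reps = []
    for _ in range(5):
        x = rng.standard_cauchy(n)
        t0 = time.perf_counter()
        np.sort(x)
        reps.append(time.perf_counter() - t0)
    return float(np.mean(reps))

ns = np.array([20_000, 40_000, 80_000, 160_000, 320_000])
ts = np.array([observed_time(n) for n in ns])

# Least-squares fit of candidate growth models; the better fit suggests the "empirical O"
# over this finite input range (a subjective, range-bound estimate).
for label, g in (("n log n", ns * np.log(ns)), ("n^2", ns.astype(float) ** 2)):
    c = np.linalg.lstsq(g.reshape(-1, 1), ts, rcond=None)[0][0]
    resid = float(np.sum((ts - c * g) ** 2))
    print(f"model {label:7s}  residual SS = {resid:.3e}")
```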

    Partition Sort Revisited: Reconfirming the Robustness in Average Case and much more!

    In our previous work there was some indication that Partition Sort could have a more robust average case O(n log n) complexity than the popular Quick Sort. In the first study of this paper, we reconfirm this through computer experiments for inputs from the Cauchy distribution, for which the expectation theoretically does not exist. Additionally, the algorithm is found to be sensitive to the parameters of the input probability distribution, demanding further investigation of its parameterized complexity. The results for this algorithm on binomial inputs in our second study are very encouraging in that direction. Comment: 8 pages.

    Smart Sort: Design and Analysis of a Fast, Efficient and Robust Comparison Based Internal Sort Algorithm

    The Smart Sort algorithm is a "smart" fusion of the heap construction procedures (of the Heap Sort algorithm) into the conventional "Partition" function (of the Quick Sort algorithm), resulting in a robust version of Quick Sort. We have also performed an empirical analysis of the average case behavior of our proposed algorithm, along with the necessary theoretical analysis for the best and worst cases. Its performance was checked against some standard probability distributions, both uniform and non-uniform: Binomial, Poisson, Discrete and Continuous Uniform, Exponential, and Standard Normal. The analysis exhibited the desired robustness coupled with the excellent performance of our algorithm. Although this paper assumes static partition ratios, its dynamic version is expected to yield still better results.

    Parameterized Complexity on a New Sorting Algorithm: A Study in Simulation

    Sundararajan and Chakraborty (2007) introduced a new sorting algorithm by modifying the fast and popular Quick sort and removing the interchanges. In a subsequent empirical study, Sourabh, Sundararajan and Chakraborty (2007) demonstrated that this algorithm sorts inputs from certain probability distributions faster than others, and the authors listed some standard probability distributions in decreasing order of speed, namely Continuous uniform < Discrete uniform < Binomial < Negative Binomial < Poisson < Geometric < Exponential < Standard Normal. It is clear from this interesting second study that the algorithm is sensitive to the input probability distribution. Based on these previous findings, in the present paper we are motivated to study this sorting algorithm further through simulation and to determine the appropriate empirical model explaining its average sorting time, with special emphasis on parameterized complexity. Comment: 14 pages.

    Simulating Raga Notes with a Markov Chain of Order 1-2

    Semi-Natural Algorithmic Composition (SNCA) is the technique of using algorithms to create music note sequences on a computer, with the understanding that how to render them is decided by the composer. In our approach we propose an SNCA2 algorithm (an extension of the SNCA algorithm) with an illustrative example in Raga Bageshree. For this, a transition probability matrix (tpm) was created for the note sequences of Raga Bageshree; then first order Markov chain (SNCA) and second order Markov chain (SNCA2) simulations were performed to generate arbitrary sequences of notes of the raga. The choice between the first and second order Markov models is best left to the composer, who has to decide how to render these note sequences. We have confirmed that Markov chains of order three and above are not promising, as their tpms become sparse matrices.
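    A first-order simulation of this kind amounts to sampling each next note from the row of the tpm indexed by the current note; a second-order chain conditions on the last two notes instead. The sketch below illustrates the mechanism only: the note set resembles a typical Bageshree scale, but the transition probabilities are invented placeholders, not the tpm estimated in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative note set and first-order transition probability matrix (tpm).
# The probabilities are placeholders; the paper estimates its tpm from actual
# note sequences of Raga Bageshree.
notes = ["S", "R", "g", "m", "P", "D", "n"]
tpm = np.array([
    [0.10, 0.30, 0.20, 0.10, 0.10, 0.10, 0.10],
    [0.25, 0.05, 0.30, 0.20, 0.05, 0.05, 0.10],
    [0.10, 0.25, 0.05, 0.30, 0.15, 0.05, 0.10],
    [0.10, 0.10, 0.25, 0.05, 0.30, 0.10, 0.10],
    [0.10, 0.05, 0.10, 0.25, 0.05, 0.30, 0.15],
    [0.15, 0.05, 0.05, 0.10, 0.25, 0.05, 0.35],
    [0.35, 0.10, 0.05, 0.05, 0.10, 0.25, 0.10],
])

def simulate_first_order(length, start="S"):
    """Generate a note sequence from the first-order chain (SNCA-style simulation)."""
    seq = [start]
    for _ in range(length - 1):
        i = notes.index(seq[-1])                 # row of the tpm for the current note
        seq.append(rng.choice(notes, p=tpm[i]))  # sample the next note from that row
    return seq

print(" ".join(simulate_first_order(32)))
# A second-order chain (SNCA2-style) would condition on the last two notes, i.e. use a
# tpm indexed by note pairs; higher orders make that matrix increasingly sparse.
```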